Dialogue Structure
Conversational DNA: A New Visual Language for Understanding Dialogue Structure in Human and AI
What if the patterns hidden within dialogue reveal more about communication than the words themselves? We introduce Conversational DNA, a novel visual language that treats any dialogue -- whether between humans, between a human and an AI, or among groups -- as a living system with interpretable structure that can be visualized, compared, and understood. Unlike traditional conversation analysis, which reduces rich interaction to statistical summaries, our approach reveals the temporal architecture of dialogue through biological metaphors. Linguistic complexity flows through strand thickness, emotional trajectories cascade through color gradients, conversational relevance forms connecting elements, and topic coherence maintains structural integrity through helical patterns. Through exploratory analysis of therapeutic conversations and historically significant human-AI dialogues, we demonstrate how this visualization approach reveals interaction patterns that traditional methods miss. Our work contributes a new creative framework for understanding communication that bridges data visualization, human-computer interaction, and the fundamental question of what makes dialogue meaningful in an age where humans increasingly converse with artificial minds.
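The metric-to-visual mapping the abstract describes can be sketched in a few lines. This is our own illustration, not the paper's implementation: the complexity measure (average word length) and the color mapping are assumptions chosen only to show the idea of driving strand thickness and color gradients from per-utterance features.

```python
# Toy sketch (not the paper's code): map one utterance to visual attributes
# for a DNA-style strand. Complexity metric and color scheme are assumptions.

def encode_utterance(text, sentiment):
    """Map an utterance to strand thickness and a sentiment color."""
    words = text.split()
    complexity = sum(len(w) for w in words) / max(len(words), 1)  # avg word length
    thickness = 1.0 + complexity / 5.0       # thicker strand = more complex
    # sentiment in [-1, 1] -> red (negative) to blue (positive) gradient
    red = int(255 * (1 - sentiment) / 2)
    blue = int(255 * (1 + sentiment) / 2)
    return {"thickness": round(thickness, 2), "color": (red, 0, blue)}

strand = encode_utterance("I feel much better today", 0.8)
```

Rendering a whole conversation would then mean drawing one such strand segment per utterance, so the temporal architecture becomes visible at a glance.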
Dialogue-Based Multi-Dimensional Relationship Extraction from Novels
Yan, Yuchen, Zhao, Hanjie, Zhu, Senbin, Liu, Hongde, Zhang, Zhihong, Jia, Yuxiang
Relation extraction is a crucial task in natural language processing, with broad applications in knowledge graph construction and literary analysis. However, the complex context and implicit expressions in novel texts pose significant challenges for automatic character relationship extraction. This study focuses on relation extraction in the novel domain and proposes a method based on Large Language Models (LLMs). By incorporating relationship dimension separation, dialogue data construction, and contextual learning strategies, the proposed method enhances extraction performance. Leveraging dialogue structure information, it improves the model's ability to understand implicit relationships and demonstrates strong adaptability in complex contexts. Additionally, we construct a high-quality Chinese novel relation extraction dataset to address the lack of labeled resources and support future research. Experimental results show that our method outperforms traditional baselines across multiple evaluation metrics and successfully facilitates the automated construction of character relationship networks in novels.
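The "relationship dimension separation" idea can be sketched as issuing one LLM query per relationship dimension, each grounded in the dialogue context. This is a minimal sketch under our own assumptions, not the authors' code: the dimension names, prompt wording, and helper functions are all illustrative, and a stand-in function replaces a real LLM call so the example runs end to end.

```python
# Minimal sketch (illustrative, not the authors' method) of dimension-separated
# relation extraction over dialogue context. DIMENSIONS is a hypothetical set.

DIMENSIONS = ["affection", "kinship", "social status"]

def build_prompt(dialogue_turns, char_a, char_b, dimension):
    """Assemble an extraction prompt for one relationship dimension."""
    context = "\n".join(f"{spk}: {utt}" for spk, utt in dialogue_turns)
    return (
        f"Dialogue:\n{context}\n\n"
        f"Question: Along the '{dimension}' dimension, what is the "
        f"relationship between {char_a} and {char_b}? Answer with one label."
    )

def extract_relations(dialogue_turns, char_a, char_b, llm):
    """Query the LLM once per dimension and collect the labels."""
    return {
        dim: llm(build_prompt(dialogue_turns, char_a, char_b, dim)).strip()
        for dim in DIMENSIONS
    }

# Toy stand-in for a real LLM call, so the sketch runs without an API.
fake_llm = lambda prompt: "sibling" if "kinship" in prompt else "close"

turns = [("Jia Baoyu", "Sister, you came!"), ("Lin Daiyu", "I just arrived.")]
relations = extract_relations(turns, "Jia Baoyu", "Lin Daiyu", fake_llm)
```

Separating dimensions this way lets each query stay focused, which is one plausible reading of how dimension separation helps with implicit relationships in long novel contexts.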
CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation
Yin, Congchi, Li, Piji, Ren, Zhaochun
Dialogue structure discovery is essential in dialogue generation. A well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses. However, most previous work has focused on dialogue structure learning in task-oriented dialogue rather than open-domain dialogue, which is more complicated and challenging. In this paper, we present CTRLStruct, a new framework for dialogue structure learning that effectively explores topic-level dialogue clusters as well as their transitions from unlabelled information. Specifically, dialogue utterances encoded by a bidirectional Transformer are further trained through a specially designed contrastive learning task to improve their representations. We then cluster the utterance-level representations to form topic-level clusters, which can be considered vertices in a dialogue structure graph. The edges in the graph, indicating transition probabilities between vertices, are calculated by mimicking expert behavior in the datasets. Finally, the dialogue structure graph is integrated into the dialogue model to perform controlled response generation. Experiments on two popular open-domain dialogue datasets show that our model generates more coherent responses than strong dialogue models and outperforms typical sentence embedding methods in dialogue utterance representation. Code is available on GitHub.
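The edge-estimation step of such a pipeline is simple to sketch: once utterances have been assigned to topic clusters (the contrastive encoding and clustering are assumed to have happened already), transition probabilities can be read off as row-normalized counts over observed dialogues. This is a generic illustration of that step, not the CTRLStruct code.

```python
# Sketch (not the authors' implementation): estimate dialogue-structure graph
# edges as empirical transition probabilities between topic clusters.

def transition_matrix(cluster_sequences, n_clusters):
    """Row-normalized counts of cluster-to-cluster transitions."""
    counts = [[0.0] * n_clusters for _ in range(n_clusters)]
    for seq in cluster_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    for row in counts:
        total = sum(row)
        if total:
            for j in range(n_clusters):
                row[j] /= total
    return counts

# Two toy dialogues over 3 topic clusters (e.g. greeting -> topic -> closing).
dialogs = [[0, 1, 1, 2], [0, 1, 2]]
T = transition_matrix(dialogs, 3)
```

Each row of `T` is then the outgoing transition distribution of one vertex in the dialogue structure graph, which is what the response generator consults when choosing the next topic.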
Predicting Corporate Risk by Jointly Modeling Company Networks and Dialogues in Earnings Conference Calls
Earnings conference calls are significant information events for volatility forecasting, which is essential for financial risk management and asset pricing. Although some recent volatility forecasting models have utilized the textual content of conference calls, the dialogue structures of conference calls and company relationships are largely ignored in the extant literature. To bridge this gap, we propose a new model called Temporal Virtual Graph Neural Network (TVGNN) for volatility forecasting that jointly models conference call dialogues and company networks. Our model differs from existing models in several important ways. First, we exploit richer dialogue structure by encoding position, utterance, speaker role, and Q&A segments. Second, we encode market states for volatility forecasting by extending Gated Recurrent Units (GRU). Third, we propose a new method for constructing temporal company networks, in which messages can only flow from temporally preceding to successive nodes, and extend Graph Attention Networks (GAT) to model company relationships. We collect conference call transcripts of S&P 500 companies from 2008 to 2019 and construct a dataset of conference call dialogues with additional information on dialogue structures and company networks. Empirical results on our dataset demonstrate the superiority of our model over competitive baselines for volatility forecasting. We also conduct supplementary analyses to examine the effectiveness of our model's key components and its interpretability.
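The temporal constraint on the company network has a compact expression: an edge from node i to node j survives only if i's call precedes j's, so information cannot leak backward in time. The sketch below is our own illustration of that filtering step, not the TVGNN implementation; company names and timestamps are made up.

```python
# Sketch (illustrative, not the authors' code): keep only company-network
# edges that point from a temporally preceding call to a later one.

def temporal_mask(edges, call_times):
    """Filter edges so messages flow only forward in time."""
    return [(i, j) for i, j in edges if call_times[i] < call_times[j]]

# Toy example: three companies with call timestamps (days since some epoch).
times = {"A": 10, "B": 12, "C": 11}
candidate_edges = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "B")]
dag_edges = temporal_mask(candidate_edges, times)
```

The surviving edges form a directed acyclic structure over calls, which is what lets a GAT-style attention layer aggregate only information that was actually available when each company spoke.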
DSBERT:Unsupervised Dialogue Structure learning with BERT
Chen, Bingkun, Dai, Shaobing, Zheng, Shenghua, Liao, Lei, Li, Yang
Unsupervised dialogue structure learning is an important and meaningful task in natural language processing. The extracted dialogue structures and processes can help analyze human dialogue and play a vital role in the design and evaluation of dialogue systems. Traditional dialogue systems require experts to manually design the dialogue structure, which is very costly. Through unsupervised dialogue structure learning, however, dialogue structure can be obtained automatically, reducing the cost for developers of constructing dialogue processes. The learned dialogue structure can be used to support dialogue generation in downstream task systems and to improve the logic and consistency of a dialogue agent's replies. In this paper, we propose DSBERT (Dialogue Structure BERT), a BERT-based unsupervised dialogue structure learning algorithm. Unlike the previous state-of-the-art models VRNN and SVRNN, we combine BERT with an autoencoder, which can effectively incorporate context information. To better prevent the model from falling into local optima and to make the dialogue state distribution more uniform and reasonable, we also propose three balanced loss functions for dialogue structure learning. Experimental results show that DSBERT can generate dialogue structures closer to the real structure and can distinguish sentences with different semantics, mapping them to different hidden states.
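One plausible form of a "balance loss" is an entropy penalty on the batch-mean state distribution: if every utterance collapses to the same hidden state, the mean distribution is peaked and the loss is high; if utterances spread across states, the mean is uniform and the loss is minimal. This is our own illustrative assumption of what such a loss might look like, not necessarily one of DSBERT's three.

```python
# Sketch (an assumption, not DSBERT's actual loss): penalize a peaked average
# state-assignment distribution so utterances spread across dialogue states.
import math

def balance_loss(state_probs):
    """Negative entropy of the mean state distribution (lower = more uniform)."""
    n_states = len(state_probs[0])
    mean = [sum(p[k] for p in state_probs) / len(state_probs)
            for k in range(n_states)]
    return sum(m * math.log(m) for m in mean if m > 0)

# A batch collapsed onto one state scores worse (higher loss) than an even spread.
collapsed = [[1.0, 0.0], [1.0, 0.0]]
spread = [[1.0, 0.0], [0.0, 1.0]]
```

Adding such a term to the reconstruction objective gives the optimizer a direct incentive to keep all hidden dialogue states in use, which matches the stated goal of a more uniform and reasonable state distribution.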
Structured Attention for Unsupervised Dialogue Structure Induction
Qiu, Liang, Zhao, Yizhou, Shi, Weiyan, Liang, Yuan, Shi, Feng, Yuan, Tao, Yu, Zhou, Zhu, Song-Chun
Inducing a meaningful structural representation from one or a set of dialogues is a crucial but challenging task in computational linguistics. Advances in this area are critical for dialogue system design and discourse analysis, and can also be extended to grammatical inference. In this work, we incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion. Compared to a vanilla VRNN, structured attention enables the model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias. Experiments show that on two-party dialogue datasets, the VRNN with structured attention learns semantic structures similar to the templates used to generate the dialogue corpus. On multi-party dialogue datasets, our model learns an interactive structure, demonstrating its capability of distinguishing speakers or addressees and automatically disentangling dialogues without explicit human annotation.
Social Bots: The Emerging Social AI Market – IQT – Medium
Mobile messaging platforms and social networking services, combined with recent advances in natural language understanding and processing (NLU/P), have helped create an emerging market for social bots. Today, social bots can do everything from managing your calendar to ordering you an Uber or providing fashion advice in the course of a conversation. As social bots become intertwined with, and integral to, social networking services and everyday digital interactions, it is quite likely that conversational social bots will be the first exposure many people have to anything approximating artificial intelligence (AI). Looking back, the origin of social bots is inextricably rooted in the development and adoption of social networking services and messaging platforms spanning several decades. Social bots found today on platforms such as Facebook Messenger, Telegram, and Kik share a distinctive pedigree with party-line Bulletin Board Systems (BBS), CompuServe, and AOL.
Recognizing Effective and Student-Adaptive Tutor Moves in Task-Oriented Tutorial Dialogue
Mitchell, Christopher Michael (North Carolina State University) | Ha, Eun Young (North Carolina State University) | Boyer, Kristy Elizabeth (North Carolina State University) | Lester, James C. (North Carolina State University)
One-on-one tutoring is significantly more effective than traditional classroom instruction. In recent years, automated tutoring systems have approached that level of effectiveness by engaging students in rich natural language dialogue that contributes to learning. A promising approach for further improving the effectiveness of tutorial dialogue systems is to model the differential effectiveness of tutorial strategies, identifying which dialogue moves or combinations of dialogue moves are associated with learning. It is also important to model the ways in which experienced tutors adapt to learner characteristics. This paper takes a corpus-based approach to these modeling tasks, presenting the results of a study in which task-oriented, textual tutorial dialogue was collected from remote one-on-one human tutoring sessions. The data reveal patterns of dialogue moves that are correlated with learning and can directly inform the design of student-adaptive tutorial dialogue management systems.